Wednesday, 07 March 2018

REST with HTTP/2

HTTP has become one of the most successful and most heavily used network protocols in the world. Version 1.0 was created in 1996 and received a minor update 3 years later. But it took more than a decade to create HTTP/2 (which was approved in 2015). Why did it take so long? Well, I wouldn’t be telling you the whole truth if I didn’t mention an experimental protocol called SPDY. SPDY was primarily focused on improving performance. The initial results were very promising, and in Google’s lab the developers measured a 55% speed improvement. This work and experience were turned into the HTTP/2 proposal back in 2012. A few years later, we can all use HTTP/2 (sometimes called h2) along with its older brother - HTTP/1.1.

Main differences between HTTP/1.1 and HTTP/2


HTTP/1.1 is a text-based protocol. Sometimes this is very convenient, since you can use low-level tools, such as Telnet, for hacking. But it doesn’t work very well for transporting large, binary payloads. HTTP/2 solves this problem with a completely redesigned architecture. Each HTTP message (a request or a response) consists of one or more frames. A frame is the smallest portion of data travelling through a TCP connection. A set of messages is aggregated into a so-called stream.


HTTP/2 lowers the number of physical connections between the server and the client by multiplexing logical connections onto a single TCP connection. Streams allow the server to recognize which frame belongs to which conversation.

How to connect using HTTP/2?

There are two ways to start an HTTP/2 conversation.

The first one, and the most commonly used one, is TLS/ALPN. During the TLS handshake, the server and the client negotiate the protocol for further communication. Unfortunately, JDKs below version 9 don’t support ALPN by default (there are a couple of workarounds, but please refer to your favorite HTTP client’s manual for suggestions).

The second one, much less popular, is the so-called plain text upgrade. During an HTTP/1.1 conversation, the client sends an HTTP/1.1 Upgrade header and proposes a new protocol for the conversation. If the server agrees, they start using it. If not, they stick with HTTP/1.1.

The good news is that Infinispan supports both of those upgrade paths. Thanks to the ALPN Hack Engine (credit goes to Stuart Douglas from the WildFly team), we support TLS/ALPN without any bootstrap classpath modification.

Configuring Infinispan server for HTTP/2

Infinispan’s REST server already supports plain text upgrades out of the box. TLS/ALPN, however, requires additional configuration, since the server needs to use a keystore. To make it even more convenient, we support generating keystores automatically when needed. Here’s an example showing how to configure a security realm:
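A minimal sketch of such a realm, assuming the WildFly-style security-realm syntax used by the Infinispan server; the realm name, password and exact attribute set are placeholders, so treat standalone-rest-ssl.xml as the authoritative version:

<security-realm name="SSLRealm">
   <server-identities>
      <ssl>
         <!-- the keystore is generated automatically if it does not exist yet -->
         <keystore path="application.keystore" relative-to="jboss.server.config.dir"
                   keystore-password="secret" alias="server"
                   generate-self-signed-certificate-host="localhost"/>
      </ssl>
   </server-identities>
</security-realm>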

The next step is to bind the security realm to a REST endpoint:
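A hedged sketch of that binding; the element and attribute names follow the 9.x endpoint subsystem but may differ slightly between server versions:

<rest-connector socket-binding="rest-ssl" cache-container="local">
   <!-- BASIC authentication backed by the application users (APP_USER/APP_PASS below) -->
   <authentication security-realm="ApplicationRealm" auth-method="BASIC"/>
   <!-- TLS (and therefore ALPN) is enabled by pointing at the keystore-backed realm -->
   <encryption security-realm="SSLRealm"/>
</rest-connector>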

You may also use one of our configuration examples. The easiest way to get it working is to use our Docker image:
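Something along these lines should do the trick (the exposed port is an assumption; the credentials and the configuration file argument are explained below):

docker run -it -p 8443:8443 -e "APP_USER=test" -e "APP_PASS=test" \
   jboss/infinispan-server:9.1.5.Final ../../docs/examples/configs/standalone-rest-ssl.xml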

Let’s explain a couple of things from the command above:

  • -e "APP_USER=test" - This is a user name we will be used for REST authentication.

  • -e "APP_PASS=test" - Corresponding password.

  • ../../docs/examples/configs/standalone-rest-ssl.xml - This is a ready-to-go configuration with REST and TLS/ALPN support.

Unfortunately, HTTP/2 functionality is broken in 9.2.0.Final, but we promise to fix it as soon as we can :) Please use 9.1.5.Final in the meantime.

Testing using CURL

Curl is one of my favorite tools. It’s very simple, powerful, and… it supports HTTP/2. Assuming that you have already started the Infinispan server using the docker run command above, you can put something into the cache:
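A sketch of the request, assuming the 8443 port mapping from the docker run example above and a cache named default; --http2 makes the protocol choice explicit, although recent curl versions negotiate HTTP/2 over TLS automatically:

curl -k -v -u test:test -d test --http2 https://localhost:8443/rest/default/test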

Once it’s there, let’s try to get it back:
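The corresponding read, under the same assumptions about port and cache name:

curl -k -v -u test:test -H "Accept: text/plain" --http2 https://localhost:8443/rest/default/test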

Let’s analyze CURL switches one by one:

  • -k - Ignores certificate validation. All automatically generated certificates are self-signed and not trusted by default.

  • -v - Debug logging.

  • -u test:test - Username and password for authentication.

  • -d test - This is the payload when invoking HTTP POST.

  • -H "Accept: text/plain" - This tells the server what type of data we’d like to get in return.

I hope you enjoyed this small tutorial about HTTP/2. I highly encourage you to have a look at the links below to learn more about this topic. You may also measure the performance of your app when using HTTP/1.1 and HTTP/2. You will be surprised!

Posted by Sebastian Łaskawiec on 2018-03-07
Tags: docker server http/2 rest

Tuesday, 06 March 2018

Accessing Infinispan inside Docker for Mac

Connecting to Infinispan instances that run inside Docker for Mac using the Java Hot Rod client can be tricky. In this blog post we’ll be analyzing what makes this environment tricky and how to get around the issue.

The tricky thing about Docker for Mac is that internal container IP addresses are not accessible externally. This is a known issue and it can be hard to work around. In container orchestrators such as OpenShift, you can use Routes to allow external access to the containers. However, if running vanilla Docker for Mac, the simplest option is to map ports over to the local machine.

Why is this important? When someone connects using the Hot Rod protocol, the server returns the current topology to the client. When Infinispan runs inside of Docker, this topology by default contains internal IP addresses. Since those are not accessible externally in Docker for Mac, the client won’t be able to connect.

To work around the issue, the Infinispan server Hot Rod endpoint can be configured with an external host/port combination, but doing this would require modifying the server’s configuration. A simpler way to get around the issue is to configure the client’s intelligence to be Basic. By doing this the server won’t send topology updates, nor will the client be able to locate keys using hashing. This has a negative performance impact, since all requests to a single Infinispan server or server cluster need to go over the same IP and port. However, for demo or sample applications on Mac environments, this is a reasonable thing to do.

So, how do we do all of this?

First, start Infinispan server and map Hot Rod’s default port 11222 to the local 11222 port:

docker run -it -p 11222:11222 jboss/infinispan-server:9.2.0.Final

Open your IDE and create a project with these dependencies:
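At a minimum that means the Hot Rod client artifact, for example (the version is chosen to match the server image above):

<dependency>
   <groupId>org.infinispan</groupId>
   <artifactId>infinispan-client-hotrod</artifactId>
   <version>9.2.0.Final</version>
</dependency>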

Finally, create a class that connects to Infinispan and does a simple put/get sequence:
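A minimal sketch of such a class, assuming the Hot Rod client configuration API; the class name and the "default" cache name are placeholders:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class DockerForMacExample {

   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      // Connect through the port mapped by Docker for Mac
      builder.addServer().host("localhost").port(11222);
      // Basic intelligence: no topology updates, no hash-aware routing
      builder.clientIntelligence(ClientIntelligence.BASIC);

      RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
      try {
         RemoteCache<String, String> cache = cacheManager.getCache("default");
         cache.put("key", "value");
         System.out.println(cache.get("key"));
      } finally {
         cacheManager.stop();
      }
   }
}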

Cheers, Galder

Posted by Galder Zamarreño on 2018-03-06
Tags: docker mac

Monday, 20 March 2017

Memory and CPU constraints inside a Docker Container

In one of the previous blog posts we wrote about different configuration options for our Docker image. Now we have taken another step and added auto-configuration for memory and CPU constraints.

Before we dig in…

Setting memory and CPU constraints on containers is a very popular technique, especially for public cloud offerings (such as OpenShift). Behind the scenes everything works by passing additional Docker settings to the containers. There are two very popular switches: --memory (which limits the amount of available memory) and --cpu-quota (which throttles CPU usage).

Now here comes the best part… the JDK has no idea about those settings! We will probably need to wait until JDK 9 to get full cgroups support.

What can we do about it?

The answer is very simple: we need to tell the JDK how much memory is available (at least by setting -Xmx) and how many CPUs are available (by setting -XX:ParallelGCThreads, -XX:ConcGCThreads and -Djava.util.concurrent.ForkJoinPool.common.parallelism).

And we have some very good news! We already did it for you!

Let’s test it out!

First, you need to pull our latest Docker image:
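Assuming the image name used throughout these posts:

docker pull jboss/infinispan-server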

Then run it with CPU and memory limits using the following command:
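A hedged example; the limit values are assumptions chosen to match the JAVA_OPTS values discussed below (500 MB of memory, of which 70% is roughly 350 MB, and a quota equivalent to 6 CPUs with Docker's default 100 ms period):

docker run -it --memory="500M" --cpu-quota=600000 jboss/infinispan-server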

Note that the JAVA_OPTS variable was overridden. Let’s have a look at what happened:

  • -Xms64m -Xmx350m - It is always a good idea to set -Xms inside a Docker container. Next we set -Xmx to 70% of the available memory.

  • -XX:ParallelGCThreads=6 -XX:ConcGCThreads=6 -Djava.util.concurrent.ForkJoinPool.common.parallelism=6 - The next thing is to align the number of threads with the CPU quota, addressing the throttling issue explained above.

There might be some cases where you wouldn’t like to set those properties automatically. In that case, just pass the -n switch to the starter script:
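A sketch under the assumption that arguments after the image name are passed straight through to the starter script:

docker run -it jboss/infinispan-server -n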

More reading

If this topic sounds interesting to you, do not forget to have a look at those links:

  • A great series of articles about memory and CPU in the containers by Andrew Dinn

  • A practical implementation by Fabric8 Team

  • A great article about memory limits by Rafael Benevides

  • OpenShift guidelines for creating Docker images

Posted by Sebastian Łaskawiec on 2017-03-20
Tags: docker openshift kubernetes

Wednesday, 20 July 2016

Improved Infinispan Docker image available


The Infinispan Docker image has been improved, making it easier to run Infinispan Servers in clustered, domain and standalone modes, with different protocol stacks.

In this blog post we’ll show a few usage scenarios and how to combine it with the jgroups-gossip image to create Infinispan Server clusters in docker based environments.

==== Getting started

By default the container runs in clustered mode, and to start a node simply execute:
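For example, assuming the jboss/infinispan-server image used in the other posts:

docker run -it jboss/infinispan-server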

Bringing up a second container will cause it to form a cluster. The membership can be verified by running a command directly in the newly launched container:
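A hedged sketch using the server's management CLI; the container name, script path and resource address are assumptions and may differ between server versions:

docker exec -it <container-name> /opt/jboss/infinispan-server/bin/ispn-cli.sh --connect \
   --command="/subsystem=datagrid-infinispan/cache-container=clustered:read-attribute(name=members)"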

Example output:

==== Using a different JGroups stack

The command above creates a cluster with the default JGroups stack (UDP), but it’s possible to pick another one provided it’s supported by the server. For example, to use TCP:
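A hedged example, assuming the default stack is selected through the jboss.default.jgroups.stack system property and that extra arguments are passed through to the server:

docker run -it jboss/infinispan-server -Djboss.default.jgroups.stack=tcp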

==== Running on cloud environments

We recently dockerized the JGroups Gossip Router to be used as an alternative discovery mechanism in environments where multicast is not enabled, such as cloud environments.

Employing a gossip router enables discovery via TCP, with the router acting as a registry: each member registers itself in this registry on startup and also discovers the other members from it.

The gossip router container can be launched with:
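A sketch, assuming the jgroups-gossip image mentioned above is published under the jboss organization:

docker run -it jboss/jgroups-gossip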

Take note of the address the router binds to; it’s needed by the Infinispan nodes. The address can easily be obtained with:
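For example, with docker inspect (the container name is a placeholder):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <gossip-router-container>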

Finally, we can launch our cluster, specifying the tcp-gossip stack along with the location of the gossip router:
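A hedged example; the system property names are assumptions based on the tcp-gossip stack definition in the clustered configuration, and the address and port are the gossip router defaults obtained above:

docker run -it jboss/infinispan-server -Djboss.default.jgroups.stack=tcp-gossip \
   -Djgroups.gossip.initial_hosts=172.17.0.2[12001]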

==== Launching Standalone mode

Passing an extra parameter allows you to run a server in standalone (non-clustered) mode:
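For example (the parameter name is an assumption; check the image documentation for the exact value):

docker run -it jboss/infinispan-server standalone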

==== Server Management Console in Domain mode

Domain mode is a special case of clustered mode (and currently a requirement for using the Server Management Console) that involves launching a domain controller process plus one or more host controller processes. The domain controller does not hold data; it is used as a centralized management process that can replicate configuration and provision servers on the host controllers.

Running a domain controller is easily achievable with a parameter:
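A hedged sketch; the parameter name is an assumption based on the mode names used in this post:

docker run -it jboss/infinispan-server domain-controller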

Once the domain controller is running, it’s possible to start one or more host controllers. In the default configuration, each host controller has two Infinispan server instances:
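A hedged sketch under the same assumption, pointing the host controller at the domain controller's address via the standard jboss.domain.master.address property:

docker run -it jboss/infinispan-server host-controller -Djboss.domain.master.address=172.17.0.2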

The command line interface can be used to verify the hosts managed in the domain:
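A hedged example running the management CLI against the domain controller; the container name and script path are assumptions:

docker exec -it <domain-controller-container> \
   /opt/jboss/infinispan-server/bin/ispn-cli.sh --connect --command=":read-children-names(child-type=host)"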

It should output all the host names that are part of the domain, including the master (domain controller):

To get access to the Management console, use credentials admin/admin and go to port 9990 of the domain controller, for example: http://172.17.0.2:9990/

==== Versions

The image is built on Docker Hub shortly after each Infinispan release (stable and unstable), and the improvements presented in this post are available for Infinispan 9.0.0.Alpha3 and Infinispan 8.2.3.Final. As a reminder, make sure to pick the right version when launching containers:
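For example, using the explicit tags for the versions mentioned above:

docker run -it jboss/infinispan-server:9.0.0.Alpha3
docker run -it jboss/infinispan-server:8.2.3.Final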

Getting involved

The image was created to be flexible and easy to use, but if something is not working for you or if you have any suggestions to improve it, please report it at https://github.com/jboss-dockerfiles/infinispan/issues/

Enjoy!

Posted by Gustavo on 2016-07-20
Tags: docker console domain mode server jgroups

Wednesday, 20 July 2016

Bleeding edge on Docker

As you may have noticed, our Docker images are published together with (or very soon after) releases. But what if you want to try out some brand new features which have just been merged? In that case you need to build an image yourself.

Step #1 - Clone JBoss Docker image repository

First, you will need to clone our Infinispan Docker image repository:
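The repository referenced at the end of this post can be cloned directly (the server subdirectory is an assumption about the repository layout):

git clone https://github.com/jboss-dockerfiles/infinispan.git
cd infinispan/server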


Step #2 - Build or download the latest SNAPSHOT

There are two options here - you can build the distribution yourself or use the SNAPSHOTs available in the JBoss Nexus repository.

The first option requires checking out the Infinispan source code and performing a Maven build:
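A sketch of the first option; the exact location of the resulting server zip inside the build tree varies between versions:

git clone https://github.com/infinispan/infinispan.git
cd infinispan
mvn clean install -DskipTests
# the infinispan-server-*.zip distribution is produced in the build output (location varies by version)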

The second one is much simpler (Infinispan SNAPSHOTs are pushed into the repository after each successful build):

Step #3 - Building the Infinispan Docker image

One of the steps in building the Infinispan Docker image is downloading the distribution from the Infinispan download page. We need to slightly modify this step and use the package we built or downloaded manually.

Modify the Dockerfile as shown below:
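A hedged sketch of the change; the file name is a placeholder, and the exact download step you are replacing depends on the Dockerfile version:

# Replace the step that downloads the release from the Infinispan download page
# with the manually built/downloaded SNAPSHOT distribution:
ADD infinispan-server-<version>-SNAPSHOT.zip /opt/jboss/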

Now you are ready to invoke the Docker build:
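For example (the repository and tag name are placeholders):

docker build -t <your-docker-hub-user>/infinispan-server:snapshot .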

Conclusion

As you can see, building a SNAPSHOT-based Docker image is very simple. From my own experience I can tell you that pushing it into Docker Hub is the fastest way to start playing with it in any PaaS environment (e.g. OpenShift Online).

Happy building!

Posted by Sebastian Łaskawiec on 2016-07-20
Tags: docker
